“make narrow AI that helps you build molecular nanotechnology and waste all that potential by making every GPU melt”
It could be done in other ways so as not to waste much potential. But even if it were done that way, we would have to weigh whether the wasted potential outweighs the risks of unaligned AGI. Also, we wouldn't be wasting our potential forever, only until alignment had been solved. Which is how things should be: AGI progress should be chasing alignment progress, not the other way around!
(And I don't think anyone here has ever advocated for terrorism or WW3. We only said that it could turn out to be a lesser evil if it happened. From that to actually advocating it is a very long way.)
Your view seems to be, in short, “I don't believe much in x-risk from unaligned AGI because I believe in MWI, and therefore we would always survive in some branches.” There are two very serious problems with that view:
First, you're staking the outcome of something of massive importance on beliefs, such as MWI and computationalism, which for now are pretty much as uncertain as whether God exists. Even if it were reasonable to assign, say, a 50% probability to the existence of an intelligent creator, there is absolutely no proof in either direction; it's a totally uncertain belief. Not to mention that it's also uncertain whether MWI and computationalism would save us from x-risk, or whether what was saved would in any way resemble our civilization.
Second, you're only considering x-risk. Many of us are much more worried about s-risk, such as an unbreakable AI dictatorship à la With Folded Hands. And MWI won't save us from that.
“Not building AGI or other transformative technologies dooms us to more prolonged local suffering and local death for many individuals”
You have no way of knowing that. What we can observe from history tells us the opposite. When we were primates, not much suffering was possible. When we invented fire and knives, we started burning and flaying people. When we invented civilization, we started prolonged tortures, wars, and genocides. The greater the technology, the more suffering becomes possible, so by building unaligned transformative technologies you're pretty much opening Pandora's box. It would be far easier to cause suffering to Ems than to real, present-day people, for instance.
“so while we shouldn’t rush into existential risk, I see it less worrisome than futures where you exist and suffer a lot,”
That sounds contradictory to what you said before. If you acknowledge AI s-risk (that AI could bring futures with much more suffering than has ever existed), then you should prefer not building transformative technologies until we can solve alignment, or at least get a good grasp of it, which we currently lack.
“So both of these [nano and end] could be decades away or longer at the current sluggish pace we’re going. ML on the other hand, is getting bigger and bigger each day”
So you wanna risk building unaligned AGI just to get there a few decades faster? Again, that looks like a wildly unbalanced risk/reward view to me.